    Amplifying Voices: Co-researchers with Learning Disabilities Use a Co-designed Survey to “have a conversation with the public”

    The concept of ‘giving voice’ in research and in the design of accessible technologies involving people with learning disabilities (LDs) has often been used to highlight the necessity of carefully considering their opinions and needs. Those who ‘communicate differently’ are often portrayed as beneficiaries of technological advancements rather than as contributors to technology that can benefit everybody. Here, we present a case study in which people with LDs co-designed an inclusive survey platform and created an online survey to “have a conversation with the public” and to challenge attitudes towards LDs. Over 800 participants with and without disabilities or impairments completed the survey and reflected on their learning experience. Using qualitative and quantitative methods, we found that the co-created platform enabled all – the co-researchers and the respondents – to have their ‘voices amplified’ and to be listened to in a meaningful way, just as in ‘a conversation’ between people.

    Creative Coding for Audiovisual Art: The CodeCircle Platform

    CodeCircle is an online, web-based programming tool developed by Goldsmiths Computing. The tool is specifically tailored to the creation of practical work in computer music, computer graphics, digital signal processing, real-time interaction, interactive machine learning, and games development and design. All such practices share a historical link to the field of digital audiovisual art, an interdisciplinary practice that emerged mainly during the twentieth century alongside the development of creative and media technologies. CodeCircle consists of a browser-based HTML5 integrated development environment (IDE) with bug detection, real-time rendering and social features. Although many such platforms exist, CodeCircle uniquely fuses interactive programming with collaborative coding, providing just-in-time (JIT) compilation (where available) alongside real-time, socially oriented document editing in a web browser. We define the core requirements for CodeCircle based on informed pedagogical and creative practice needs. We do this through a brief definition of audiovisual art methods in the context of creative computing, further contextualising its position as a domain of enquiry that both depends on and informs technological innovation in sound, graphics and interaction.

    Antagonising explanation and revealing bias directly through sequencing and multimodal inference

    Deep generative models, e.g. diffusion models, produce data according to a learned representation through a process of approximation that computes possible samples. Approximation can be understood as reconstruction, and the large datasets used to train models as sets of records in which we represent the physical world with some data structure (photographs, audio recordings, manuscripts). During reconstruction, image frames, for example, develop at each timestep towards a textual input description. While moving forward in time, frame sets are shaped according to learned bias, and their production, we argue here, can be considered as going back in time; not by allusion to the backward diffusion process, but by acknowledging that culture is specifically marked in the records. Futures of generative modelling, notably in film and the audiovisual arts, can benefit from treating diffusion systems as a process that computes the future while inevitably being tied to the past, if we acknowledge that the records capture fields of view at a specific time and correlate with our own finite ideals of memory. Models generating new data distributions can target video production as signal processors, and by developing sequences through timelines we ourselves also go back to decades-old algorithmic and multi-track methodologies, revealing the actual predictive failure of contemporary approaches to synthesis in the moving image, both as relevant to composition and as not explanatory. Comment: 3 pages, no figures. ACM C&C '23 Workshop paper.
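
    Purely as an illustration of the reconstruction process this abstract refers to, the following minimal Python sketch shows a generic reverse-diffusion loop in which noise is refined over timesteps towards a text description. `denoiser` and `encode_text` are hypothetical stand-ins for a trained model and text encoder, and the update rule is deliberately simplified; it is not any specific sampler or the paper's own code.

```python
import torch

def generate(denoiser, encode_text, prompt, steps=50, shape=(1, 3, 64, 64)):
    cond = encode_text(prompt)            # text description guiding the reconstruction
    x = torch.randn(shape)                # start from pure noise
    for t in reversed(range(steps)):      # step "back" through the timesteps
        predicted_noise = denoiser(x, t, cond)
        x = x - predicted_noise / steps   # simplified move towards the learned distribution
    return x                              # sample shaped by whatever bias the training records carry
```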

    Amplifying The Uncanny

    Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that (to the untrained eye) are indistinguishable from real images. Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explores the aesthetic outcome of inverting this process, instead optimising the system to generate images that it predicts as being fake. This maximises the unlikelihood of the data and, in turn, amplifies the uncanny nature of these machine hallucinations.
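
    One way to realise the inversion described above can be sketched as follows: a latent code is optimised so that a discriminator scores the generated image as maximally fake. This is a minimal PyTorch sketch under assumed names; `generator` and `discriminator` are hypothetical pretrained networks, and the artwork's own method (which may fine-tune the generator itself rather than the latent) is not reproduced here.

```python
import torch

def amplify_uncanny(generator, discriminator, latent_dim=128, steps=200, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)   # latent code to optimise
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)
        realness = discriminator(img).mean()   # higher score = judged more "real"
        loss = realness                        # minimising realness maximises predicted "fakeness"
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()               # an image the system itself predicts to be fake
```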

    Autoencoding Video Frames

    This report details the implementation of an autoencoder trained with a learned similarity metric, one that is capable of modelling a complex distribution of natural images, training it on frames from selected films, and using it to reconstruct video sequences by passing each frame through the autoencoder and re-sequencing the output frames in order. This is primarily an artistic exploration of the representational capacity of the current state of the art in generative models and is a novel application of autoencoders. The model is trained on, and used to reconstruct, the films Blade Runner and A Scanner Darkly, producing new artworks in their own right. Experiments passing other videos through these models are carried out, demonstrating the potential of this method to become a new technique in the production of experimental image and video.
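
    The frame-by-frame reconstruction pipeline described here can be sketched roughly as follows: each frame is passed through a trained autoencoder and the outputs are written back out in their original order. `autoencoder` is a hypothetical trained model and frame I/O uses OpenCV; this is an assumed sketch, not the report's actual code.

```python
import cv2
import numpy as np
import torch

def reconstruct_video(autoencoder, in_path, out_path, size=(256, 256), fps=24.0):
    cap = cv2.VideoCapture(in_path)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = cv2.resize(frame, size).astype(np.float32) / 255.0
        x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)   # HWC -> NCHW
        with torch.no_grad():
            y = autoencoder(x)                                   # encode and decode one frame
        y = (y.squeeze(0).permute(1, 2, 0).numpy() * 255).clip(0, 255).astype(np.uint8)
        writer.write(y)                                          # keep the original frame order
    cap.release()
    writer.release()
```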

    Real-time interactive sequence generation and control with Recurrent Neural Network ensembles

    Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) RNNs, are a popular and very successful method for learning and generating sequences. However, current generative RNN techniques do not allow real-time interactive control of the sequence generation process and are thus not well suited to live creative expression. We propose a method for real-time continuous control and ‘steering’ of sequence generation using an ensemble of RNNs, dynamically altering the mixture weights of the models. We demonstrate the method using character-based LSTM networks and a gestural interface that allows users to ‘conduct’ the generation of text.
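
    A rough sketch of the mixture-weight idea: several character-level LSTMs each predict a next-character distribution, and those distributions are blended with user-controlled weights before sampling. The per-model interface shown (a step function returning logits and a new hidden state) is an assumption for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def sample_next_char(models, hidden_states, prev_char_idx, mixture_weights, temperature=1.0):
    """Blend the predictive distributions of an RNN ensemble and sample one character."""
    w = torch.tensor(mixture_weights, dtype=torch.float32)
    w = w / w.sum()                                   # normalise the user-controlled mixture weights
    mixed = 0.0
    new_states = []
    for model, h, wi in zip(models, hidden_states, w):
        logits, h_next = model(prev_char_idx, h)      # one step of a character-level LSTM
        mixed = mixed + wi * F.softmax(logits / temperature, dim=-1)
        new_states.append(h_next)
    idx = torch.multinomial(mixed.view(-1), 1).item() # sample from the blended distribution
    return idx, new_states
```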